How to Hack Like a Ghost by Sparc Flow

Author: Sparc Flow
Language: eng
Format: azw3
ISBN: 9781718501270
Publisher: No Starch Press
Published: 2021-05-10T16:00:00+00:00


NOTE

In a multimaster setup, we will have three or more replicas of each of these pods, but only one active pod per service at any given time.

We actually already interacted with the master node when using kubectl apply commands to send manifest files. Kubectl is a wrapper that sends HTTP requests to the all-important API server pod, the main entry point to retrieve and persist the famous desired state of the cluster. Here is a typical configuration one may use to reach the Kube cluster (~/.kube/config):

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
--snip--
users:
- name: sparc
  user:
    client-certificate: /root/.minikube/client.crt
    client-key: /root/.minikube/client.key
--snip--

Our API server URL in this case is https://192.168.99.100:8443. Think of it this way: the API server is the only pod allowed to read/write the desired state in the database. Want to list pods? Ask the API server. Want to report a pod failure? Tell the API server. It is the main orchestrator that conducts the complex symphony that is Kubernetes.
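To make this concrete, here is a sketch of what a kubectl query boils down to. The server address comes from the kubeconfig above, and the path is the standard Kubernetes API endpoint for listing pods in the default namespace; nothing is actually contacted here, we only assemble the URL:

```python
# "kubectl get pods" is essentially an HTTPS GET against the API server,
# authenticated with the client certificate from ~/.kube/config.
server = "https://192.168.99.100:8443"    # from the kubeconfig above
path = "/api/v1/namespaces/default/pods"  # standard pod-listing endpoint
url = server + path
print(url)
```

Running the equivalent request with curl and the client certificate/key from the kubeconfig returns the same JSON that kubectl pretty-prints.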

When we submitted our deployment file to the API server through kubectl (HTTP), it made a series of checks (authentication and authorization, which we will cover in Chapter 8) and then wrote that deployment object in the etcd database, which is a key-value database that maintains a consistent and coherent state across multiple nodes (or pods) using the Raft consensus algorithm. In the case of Kube, etcd describes the desired state of the cluster, such as how many pods there are, their manifest files, service descriptions, node descriptions, and so on.
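By default, the API server persists these objects in etcd under the /registry key prefix. A listing of the keys involved in our deployment might look roughly like this (the object names here are illustrative):

```
/registry/deployments/default/nginx
/registry/replicasets/default/nginx-6d4cf56db6
/registry/pods/default/nginx-6d4cf56db6-abcde
/registry/services/specs/default/nginx
```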

Once the API server writes the deployment object to etcd, the desired state has officially been altered. It notifies the callback handler that subscribed to this particular event: the deployment controller, another component running on the master node.

All Kube interactions are based on this type of event-driven behavior, which is a reflection of etcd’s watch feature. The API server receives a request or a notification, then reads or modifies the desired state in etcd; each change triggers an event delivered to the corresponding handler.
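The watch pattern can be sketched in a few lines: handlers subscribe to a key prefix, and every write to a matching key fires their callbacks. This is a minimal in-memory illustration of the idea, not the real etcd client API:

```python
# Minimal sketch of an etcd-style watch: subscribers register a key prefix,
# and every put() on a matching key delivers an event to their callback.

class TinyStore:
    def __init__(self):
        self.data = {}
        self.watchers = []          # list of (prefix, callback) pairs

    def watch(self, prefix, callback):
        self.watchers.append((prefix, callback))

    def put(self, key, value):
        self.data[key] = value      # mutate the "desired state"
        for prefix, cb in self.watchers:
            if key.startswith(prefix):
                cb(key, value)      # event delivered to the subscribed handler

events = []
store = TinyStore()
# The deployment controller "subscribes" to deployment objects...
store.watch("/registry/deployments/", lambda k, v: events.append((k, v)))
# ...so when the API server writes a new deployment, the controller is notified.
store.put("/registry/deployments/default/nginx", {"replicas": 2})
print(events)
```

In the real cluster, each controller holds a long-lived watch against the API server rather than against etcd directly, but the notification flow is the same.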

The deployment controller asks the API server to send back the new desired state, notices that a deployment has been initialized, but does not find any reference to the group of pods it is supposed to manage. It resolves this discrepancy by creating a ReplicaSet, an object describing the replication strategy of a group of pods.
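A ReplicaSet generated for a two-replica deployment might look roughly like the following manifest (the name, labels, and image are assumed for illustration):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-6d4cf56db6    # hypothetical name generated by the controller
  labels:
    app: nginx
spec:
  replicas: 2               # desired state: two pods of this template
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21   # assumed container image
```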

This operation goes through the API server again, which updates the state once more. This time, however, the event is sent to the ReplicaSet controller, which in turn notices a mismatch between the desired state (a group of two pods) and reality (no pods). It proceeds to create the definition of the containers.

This process (you guessed it) goes through the API server again, which, after modifying the state, triggers a callback for pod creation, which is monitored by the kube-scheduler (a dedicated pod running on the master node).

The scheduler sees two pods in the database in a pending state. Unacceptable. It runs


